Extending Analogical Generalization with Near-Misses

Authors

  • Matthew D. McLure
  • Scott E. Friedman
  • Kenneth D. Forbus
Abstract

Concept learning is a central problem for cognitive systems. Generalization techniques can help organize examples by their commonalities, but comparisons with non-examples (near-misses) can provide discrimination. Early work on near-misses required hand-selected examples from a teacher who understood the learner's internal representations. This paper introduces Analogical Learning by Integrating Generalization and Near-misses (ALIGN) and describes three key advances. First, domain-general cognitive models of analogical processes are used to handle a wider range of examples. Second, ALIGN's analogical generalization process constructs multiple probabilistic representations per concept via clustering, and hence can learn disjunctive concepts. Finally, ALIGN uses unsupervised analogical retrieval to find its own near-miss examples. We show that ALIGN outperforms analogical generalization on two perceptual data sets: (1) hand-drawn sketches; and (2) geospatial concepts from strategy-game maps.
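The clustering-based generalization and near-miss discrimination described in the abstract can be illustrated with a minimal sketch. This is not the ALIGN implementation (which operates over structured relational descriptions via analogical matching); it assumes a simplified setting where each example is a flat set of symbolic facts, and all class and function names here are hypothetical.

```python
# Minimal sketch: analogical generalization via greedy clustering,
# with a near-miss step that extracts discriminating facts.
# Assumes examples are flat sets of facts (a simplification of
# ALIGN's structured relational representations).

def overlap(a, b):
    """Jaccard similarity between two fact sets."""
    return len(a & b) / len(a | b)

class Generalization:
    """A probabilistic generalization: fact -> occurrence frequency."""
    def __init__(self, example):
        self.count = 1
        self.freq = {fact: 1 for fact in example}

    def assimilate(self, example):
        self.count += 1
        for fact in example:
            self.freq[fact] = self.freq.get(fact, 0) + 1

    def facts(self):
        return set(self.freq)

    def prob(self, fact):
        """Fraction of assimilated examples containing this fact."""
        return self.freq.get(fact, 0) / self.count

def learn(examples, threshold=0.5):
    """Cluster examples into generalizations; a concept may end up
    with several clusters, which is how disjunction arises."""
    gens = []
    for ex in examples:
        best = max(gens, key=lambda g: overlap(ex, g.facts()), default=None)
        if best and overlap(ex, best.facts()) >= threshold:
            best.assimilate(ex)
        else:
            gens.append(Generalization(ex))
    return gens

def discriminate(gen, near_miss):
    """Facts of the generalization absent from a near-miss: candidate
    discriminating features for the concept."""
    return {f for f in gen.facts() if f not in near_miss}
```

For example, `learn([{'a','b','c'}, {'a','b','d'}, {'x','y'}])` merges the first two examples into one probabilistic generalization (with `a` and `b` at probability 1.0) and keeps the third as a separate cluster; comparing the first cluster against a near-miss `{'a','b'}` isolates `c` and `d` as discriminating facts.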


Similar articles

Combining progressive alignment and near-misses to learn concepts from sketches

Learning to classify examples as concepts is an important challenge for cognitive science. In cognitive psychology, analogical generalization, i.e., abstracting the common structure of highly similar examples, has been shown to lead to rapid learning. In AI, providing very similar negative examples (near-misses) has been shown to accelerate learning. This paper describes a model of concept learn...


Learning concepts from sketches via analogical generalization and near-misses

Modeling how concepts are learned from experience is an important challenge for cognitive science. In cognitive psychology, progressive alignment, i.e., comparing highly similar examples, has been shown to lead to rapid learning. In AI, providing negative examples (near-misses) that are very similar has been proposed as another way to accelerate learning. This paper describes a model of concept...


Inductive, analogical, and communicative generalization

Three forms of inductive generalization (statistical generalization, variation-based generalization, and theory-carried generalization) are insufficient for case-to-case generalization, which is a form of analogical generalization. The quality of case-to-case generalization needs to be reinforced by setting up explicit analogical argumentation. To evaluate analogical argumentation, six crite...


A Constraint-based Induction Algorithm in FOL

We present a bottom-up generalization algorithm that builds the maximally general terms covering a positive example and rejecting negative examples in first-order logic (FOL), i.e., in terms of Version Spaces, the set G. This algorithm is based on rewriting negative examples as constraints upon the generalization of the positive example at hand. The constraints space is partially ordered, inducing a partia...
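The version-space idea in the snippet above (maintaining the maximally general set G that covers a positive example while rejecting negatives) can be sketched in a simplified attribute-value language. The cited paper works in full first-order logic; this propositional sketch, with hypothetical helper names, only illustrates how each negative acts as a constraint that forces a minimal specialization.

```python
# Minimal sketch of the G boundary for one positive example in an
# attribute-value language. Each hypothesis is a tuple whose slots are
# either a required value or the wildcard '?'.

WILD = '?'

def covers(hyp, ex):
    """A hypothesis covers an example if every non-wild slot matches."""
    return all(h == WILD or h == v for h, v in zip(hyp, ex))

def more_general(g, h):
    """g is at least as general as h if g matches wherever h constrains."""
    return all(gv == WILD or gv == hv for gv, hv in zip(g, h))

def g_set(positive, negatives):
    """Start maximally general; specialize minimally to reject each
    negative while still covering the positive example."""
    G = [tuple(WILD for _ in positive)]
    for neg in negatives:
        new_G = []
        for h in G:
            if not covers(h, neg):
                new_G.append(h)
                continue
            # Rewrite the negative as constraints: fill in one wild slot
            # where the positive and negative disagree.
            for i, (hv, pv, nv) in enumerate(zip(h, positive, neg)):
                if hv == WILD and pv != nv:
                    new_G.append(h[:i] + (pv,) + h[i + 1:])
        # Keep only maximally general members.
        G = [h for h in new_G
             if not any(g != h and more_general(g, h) for g in new_G)]
    return G
```

With positive `('red','round','small')` and negative `('blue','round','small')`, the only minimal specialization is to pin the first slot, yielding `('red','?','?')`; a second negative `('red','square','small')` further forces `('red','round','?')`.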


Transfer Learning through Analogy in Games

Effectively transferring previously learned knowledge to a new domain is one of the hallmarks of human intelligence. This is the objective of transfer learning, in which transferred knowledge guides the learning process in a broad range of new situations. In near transfer, the source and target domains are very similar and solutions can be transferred almost verbatim. In far tran...



Publication date: 2015